Additive Regularization: Fusion of Training and Validation Levels in Kernel Methods

Authors

  • K. Pelckmans
  • J.A.K. Suykens
  • B. De Moor
Abstract

In this paper, the training of Least Squares Support Vector Machines (LS-SVMs) for classification and regression, together with the determination of the regularization constants, is reformulated in terms of additive regularization. In contrast with the classical Tikhonov scheme, a major advantage of this additive regularization mechanism is that it makes it possible to computationally fuse the training and validation levels, leading to a single set of linear equations that characterizes training and validation at once. The problem of overfitting on the validation data is addressed by explicitly restricting the degrees of freedom of the regularization constants. Different restriction schemes are investigated, including an ensemble model approach. The link between the Tikhonov scheme and additive regularization is explained, and an efficient cross-validation method with additive regularization is proposed. The new methods are illustrated with several examples on synthetic and real-life data sets.
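To make the mechanism concrete, here is a minimal sketch (our illustration, not code from the paper; the RBF kernel, the toy data, and helper names such as lssvm_additive_reg are assumptions) of how, for a fixed additive regularization vector c, LS-SVM training reduces to a single linear system in the bias b and the dual variables alpha. The classical Tikhonov scheme with constant gamma corresponds to the particular choice c = ((1 - gamma)/gamma) * alpha at the solution; the paper's fusion idea treats c itself as unknown and appends validation conditions to the same linear system.

```python
import numpy as np

def rbf_kernel(X1, X2, sigma=1.0):
    # Gram matrix of the RBF kernel k(x, z) = exp(-||x - z||^2 / sigma^2).
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / sigma**2)

def lssvm_additive_reg(X, y, c, sigma=1.0):
    """Solve the LS-SVM conditions y = Omega @ alpha + b*1 + e with the
    additive parametrization e = alpha + c, i.e. the linear system

        [ 0        1^T   ] [ b     ]   [ 0     ]
        [ 1   Omega + I  ] [ alpha ] = [ y - c ].
    """
    n = len(y)
    Omega = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = Omega + np.eye(n)
    rhs = np.concatenate(([0.0], y - c))
    sol = np.linalg.solve(A, rhs)
    return sol[1:], sol[0]  # alpha, b

# Toy usage: c = 0 corresponds to the Tikhonov case gamma = 1.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(50, 1))
y = np.sinc(X[:, 0]) + 0.1 * rng.standard_normal(50)
alpha, b = lssvm_additive_reg(X, y, c=np.zeros(50))
y_fit = rbf_kernel(X, X) @ alpha + b  # fitted training outputs
```

Because the solution is linear in c, validation residuals are also linear in c, which is what allows training and validation to be characterized jointly by one set of linear equations.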


Similar articles

Convex optimization for the design of learning machines

Pelckmans K., Suykens J.A.K., De Moor B., "Building Sparse Representations and Structure Determination on LS-SVM Substrates", Neurocomputing Special Issue, Vol. 64, pp. 137-159, March 2005. Pelckmans K., Goethals I., De Brabanter J., Suykens J.A.K., De Moor B., "Componentwise Least Squares Support Vector Machines", chapter in Support Vector Machines: Theory and Applications (Wang L., ...


Sparse LS-SVMs using additive regularization with a penalized validation criterion

This paper builds on a new way of determining the regularization trade-off in least squares support vector machines (LS-SVMs) via a mechanism of additive regularization which was recently introduced in [6]. This framework enables computational fusion of the training and validation levels and allows the model to be trained, together with finding the regularization constants, by solving a single lin...


Componentwise Least Squares Support Vector Machines

This chapter describes componentwise Least Squares Support Vector Machines (LS-SVMs) for the estimation of additive models consisting of a sum of nonlinear components. The primal-dual derivations characterizing LS-SVMs for the estimation of the additive model result in a single set of linear equations with size growing in the number of data-points. The derivation is elaborated for the classific...
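As a hedged sketch of the componentwise construction (our illustration; the one-dimensional RBF choice and the function names are assumptions), the additive model's Gram matrix is simply the sum of one Gram matrix per input component, and it can replace the ordinary kernel matrix Omega in the LS-SVM linear system sketched above.

```python
import numpy as np

def rbf_1d(u, v, sigma=1.0):
    # One-dimensional RBF kernel matrix between vectors u and v.
    return np.exp(-((u[:, None] - v[None, :]) ** 2) / sigma**2)

def componentwise_gram(X, sigma=1.0):
    # Additive-model Gram matrix: one kernel per input component,
    # matching the model f(x) = sum_p f_p(x_p).
    return sum(rbf_1d(X[:, p], X[:, p], sigma) for p in range(X.shape[1]))
```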


Approximate Regularization Paths for ℓ2-loss Support Vector Machines

We consider approximate regularization paths for kernel methods and, in particular, ℓ2-loss Support Vector Machines (SVMs). We provide a simple and efficient framework for maintaining an ε-approximate solution (and a corresponding ε-coreset) along the entire regularization path. We prove the correctness and practical efficiency of our method. Unlike previous algorithms, our algorithm does not need an...


Simultaneous Model Selection and Optimization through Parameter-free Stochastic Learning

Stochastic gradient descent algorithms for training linear and kernel predictors are gaining more and more importance, thanks to their scalability. While various methods have been proposed to speed up their convergence, the model selection phase is often ignored. In fact, in theoretical works, most of the time assumptions are made, for example, on the prior knowledge of the norm of ...




Publication date: 2003